
    GHOST: experimenting countermeasures for conflicts in the pilot's activity

    An approach for designing countermeasures to cure conflict in aircraft pilots’ activities is presented, based on both Artificial Intelligence and Human Factors concepts. The first step is to track the pilot’s activity, i.e. to reconstruct what the pilot has actually done from the flight parameters and from reference models describing the mission and procedures. The second step is to detect conflict in the pilot’s activity, which is linked to what really matters for the achievement of the mission. The third step is to design accurate countermeasures that are likely to do better than the existing onboard devices. The three steps are presented and supported by experimental results obtained with private and professional pilots.
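The track–detect–counter loop described above can be sketched as a minimal pipeline. All names, thresholds, and data structures below are illustrative assumptions, not the GHOST implementation:

```python
# Toy sketch of a track -> detect -> counter loop (illustrative only).

EXPECTED_PROCEDURE = ["gear_down", "flaps_full", "reduce_speed"]  # reference model

def track_activity(flight_params):
    """Reconstruct what the pilot actually did from recorded parameters."""
    actions = []
    if flight_params["gear"] == "down":
        actions.append("gear_down")
    if flight_params["flaps"] == "full":
        actions.append("flaps_full")
    if flight_params["speed_kt"] <= 140:
        actions.append("reduce_speed")
    return actions

def detect_conflict(done, expected):
    """A conflict here is an expected, mission-critical action that was skipped."""
    return [a for a in expected if a not in done]

def countermeasure(conflicts):
    """Attach a salient cue (e.g. a voice alert) to each missed action."""
    return {c: f"voice alert: check {c}" for c in conflicts}

params = {"gear": "down", "flaps": "half", "speed_kt": 155}
missed = detect_conflict(track_activity(params), EXPECTED_PROCEDURE)
print(countermeasure(missed))
```

The point of the sketch is the separation of the three steps: activity reconstruction is kept independent of conflict detection, which in turn is independent of the choice of countermeasure.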

    Authority Management and Conflict Solving in Human-Machine Systems

    This paper focuses on vehicle-embedded decision autonomy and the human operator’s role in so-called autonomous systems. Autonomy control and authority sharing are discussed, and the possible effects of authority conflicts on the human operator’s cognition and situation awareness are highlighted. As an illustration, an experiment conducted at ISAE (the French Aeronautical and Space Institute) shows that the occurrence of a conflict leads to a perseveration behavior and attentional tunneling of the operator. Formal methods are discussed to infer such attentional impairment from the monitoring of physiological and behavioral measures and some results are given

    Authority management in human-robot systems

    In the context of missions accomplished jointly by an artificial agent and a human agent, we focus on a controller of the authority dynamics based on a dependence graph of resources that can be controlled by both agents. The controller is designed to adapt the behaviour of the artificial agent or of the human agent in case of an authority conflict occurring on these resources. The relative authority of two agents regarding the control of a resource is defined, as is the notion of authority conflict, which appears relevant to trigger authority reallocation between agents, as shown by a first experiment. Finally, a second experiment shows that beyond modifying the artificial agent's behaviour, it is also possible to adapt the human operator's behaviour in order to solve such a conflict.
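The idea of a controller that arbitrates authority over shared resources can be sketched as follows. The class, resource names, and reallocation rule are assumptions for illustration, not the controller described in the paper:

```python
# Toy authority controller over resources shared by a human and a robot
# (illustrative assumptions only).

class AuthorityController:
    def __init__(self, resources):
        # current maps each resource to its controlling agent, or None if free
        self.current = {r: None for r in resources}

    def request(self, agent, resource):
        """Grant control, or flag an authority conflict if the other agent holds it."""
        holder = self.current[resource]
        if holder is None or holder == agent:
            self.current[resource] = agent
            return "granted"
        return "conflict"

    def reallocate(self, resource, to_agent):
        """A detected conflict triggers an explicit authority redistribution."""
        self.current[resource] = to_agent

ctrl = AuthorityController(["camera_pan", "wheel_speed"])
ctrl.request("human", "wheel_speed")
status = ctrl.request("robot", "wheel_speed")   # both agents want the same resource
if status == "conflict":
    ctrl.reallocate("wheel_speed", "robot")     # e.g. a safety rule favours the robot
```

In this sketch the conflict itself is the trigger for reallocation, which matches the paper's finding that conflict is a relevant trigger for authority redistribution; adapting the *human* agent's behaviour instead (the second experiment) would replace the `reallocate` call with a cue directed at the operator.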

    Detection and resolution of authority conflicts in a human-robot system

    In the context of missions carried out jointly by an artificial agent and a human agent, we present a controller of the authority dynamics, based on a graph of dependencies between resources that both agents can control, whose goal is to adapt the behaviour of the artificial agent or of the human agent in case of an authority conflict over these resources. We define the relative authority of two agents with respect to the control of a resource, as well as the notion of authority conflict: a first experiment shows that conflict is indeed a relevant trigger for redistributing authority between agents. A second experiment shows that, beyond modifying the artificial agent's behaviour, it is also possible to adapt the human operator's behaviour in order to resolve such a conflict.

    What the heck is it doing? Better understanding human-machine conflicts through models

    This paper deals with human-machine conflicts, with a special focus on conflicts caused by an “automation surprise”. Considering both the human operator and the machine (autopilot or decision functions) as agents, we propose Petri net based models of two real cases and show how modelling each agent’s possible actions is likely to highlight conflict states as deadlocks in the Petri net. A general conflict model is then proposed, paving the way for further on-line human-machine conflict forecast and detection.

    First steps toward adaptive autonomy without levels

    In the context of the supervision of missions carried out by one or several artificial agents (robots, UAVs...) under a human operator, the sharing of roles and authority is a well-established problem. A balance must be struck between purely manual control of the vehicles, which generally gives high confidence in the system but places a heavy workload on the human operator, and full autonomy of the vehicles, which offers fewer guarantees in uncertain environments and lower performance. Adjustable (or adaptive) autonomy based on autonomy levels seems to be an answer to this kind of problem. However, this type of approach is not without flaws: the levels constitute rigid, predefined modes of authority sharing and task allocation, not to mention the lack of critical perspective on the operator's contributions, too often considered purely beneficial. We present the basic concepts of an approach intended to dynamically adapt an agent's autonomy relative to a human operator, based not on autonomy levels but on the management of resources and of the conflicts over the use of these resources.

    Authority sharing in human-robot systems

    In the context of missions accomplished jointly by an artificial agent and a human agent, we focus on a controller of the authority dynamics based on a dependence graph of resources that can be controlled by both agents. The controller is designed to adapt the behaviours of the artificial agent or of the human agent in case of an authority conflict occurring on these resources. The relative authority of two agents regarding the control of a resource is defined, as is the notion of authority conflict, which appears relevant to trigger authority reallocation between agents, as shown by a first experiment. Finally, a second experiment shows that beyond the modification of the artificial agent's behaviour, it is also possible to adapt the human operator's behaviour in order to solve such a conflict.

    Petri net-based modelling of human–automation conflicts in aviation

    Analyses of aviation safety reports reveal that human–machine conflicts induced by poor automation design are remarkable precursors of accidents. A review of different crew–automation conflicting scenarios shows that they have a common denominator: the autopilot behaviour interferes with the pilot's goal regarding the flight guidance via ‘hidden’ mode transitions. Considering both the human operator and the machine (i.e. the autopilot or the decision functions) as agents, we propose a Petri net model of those conflicting interactions, which allows them to be detected as deadlocks in the Petri net. In order to test our Petri net model, we designed an autoflight system that was formally analysed to detect conflicting situations. We identified three conflicting situations that were integrated in an experimental scenario in a flight simulator with 10 general aviation pilots. The results showed that the conflicts that we had a priori identified as critical impacted the pilots' performance. Indeed, the first conflict remained unnoticed by eight participants and led to a potential collision with another aircraft. The second conflict was detected by all the participants, but three of them did not manage the situation correctly. The last conflict was also detected by all the participants but provoked a typical automation surprise situation, as only one participant declared that he had understood the autopilot behaviour. These behavioural results are discussed in terms of workload and number of fired ‘hidden’ transitions. Finally, this study reveals that formal and experimental approaches are complementary for identifying and assessing the criticality of human–automation conflicts. Practitioner Summary: We propose a Petri net model of human–automation conflicts. An experiment was conducted with general aviation pilots performing a scenario involving three conflicting situations to test the soundness of our formal approach.
This study reveals that formal and experimental approaches are complementary for identifying and assessing the criticality of human–automation conflicts
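The conflict-as-deadlock idea can be illustrated with a minimal Petri net: the net reaches a marking where no transition is enabled although the pilot's goal place is still unmarked. The net below is a toy example built for this sketch, not the autoflight model analysed in the paper:

```python
# Toy Petri net where a human-automation conflict surfaces as a deadlock
# (illustrative example; places and transitions are invented for the sketch).

# transitions: name -> (input places consumed, output places produced)
TRANSITIONS = {
    "pilot_selects_descent": ({"pilot_intends_descent", "mode_auth"}, {"descending"}),
    "autopilot_hidden_level_off": ({"alt_capture_armed", "mode_auth"}, {"level_flight"}),
}

def enabled(marking, trans):
    """A transition is enabled when all its input places are marked."""
    inputs, _ = TRANSITIONS[trans]
    return inputs <= marking

def fire(marking, trans):
    """Firing consumes the input tokens and produces the output tokens."""
    inputs, outputs = TRANSITIONS[trans]
    return (marking - inputs) | outputs

marking = {"pilot_intends_descent", "alt_capture_armed", "mode_auth"}
# The 'hidden' mode transition fires first and consumes the shared authority token...
marking = fire(marking, "autopilot_hidden_level_off")
# ...leaving the pilot's transition dead: the conflict is detected as a deadlock.
deadlock = (not any(enabled(marking, t) for t in TRANSITIONS)
            and "descending" not in marking)
print(deadlock)
```

Formal analysis of such a net would enumerate reachable markings and flag exactly these dead markings, which is how the conflicting situations used in the simulator scenario could be identified in advance.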